
    Soil chemistry and ecology on a restoration trajectory of a coastal sandplain forest, Punakaiki, New Zealand

    This research was carried out to better understand the interactive roles of vegetation and soil biogeochemistry on an ecological restoration trajectory on the West Coast of New Zealand. The Punakaiki Coastal Restoration Project (PCRP) was developed to restore degraded land to more natural vegetation, resembling the original sandplain forest that has largely disappeared. Ecological restoration at the site, in terms of both practice and research, has mainly focused on plant establishment and faunal colonization. The present study investigated whether restoration of soils is an integral part of this process. The project aimed to understand whether ecological restoration significantly modifies soils and, vice versa, whether the physicochemical variability of soils significantly influences the restoration trajectory. This research is based on a combination of laboratory, glasshouse and field-based studies. Incubation of native plant litters in soil was found to change soil chemical properties, including nitrogen (N) dynamics. Two native species, Kunzea robusta and Olearia paniculata, may have the potential to ameliorate concerns associated with nitrate leaching and nitrous oxide production. Restored vegetation at the study site modified the dynamics of dissolved organic carbon (DOC) and mobile N in soil solution and increased rates of N mineralization. Interactions between vegetation and soil biota significantly influenced these changes; changed soil conditions have also altered the composition of soil faunal communities. Study of soil pedogenesis revealed a previously unknown spatial variability of the soil template. As soils have aged, this has been reflected in a loss of total soil phosphorus (P), an increase in occluded P, and an increasing proportional importance of soil organic P. The dynamics of soil P fractionation on a short-term soil chronosequence across the site provided a better understanding of the response of soil biogeochemistry to the trajectory of ecological restoration on old and young soils. Key parameters were shown to be soil pH, organic matter, organic P and the variability of different P fractions. A detailed comparison of remnants of New Zealand flax and nikau palm, and abandoned agricultural grassland, provided an opportunity to investigate the effects of these different types of vegetation on soil development. Multiple variables were found to be significant, including differences in plant physiology, soil organisms, the hydrological gradient of an alluvial fan, and guano deposition, all of which modified soil P fractionation and secondary iron/aluminium (Fe/Al) minerals. In a glasshouse experiment, soil dehydrogenase activity and biologically based P (CaCl2-P, citrate-P and HCl-P) were significantly increased through interactions of earthworms and guano; the dynamics of soil P were further modified by interactions with flax plants. The relationships between soil chemistry, biodiversity and plants on the restoration trajectory at PCRP were synthesized using multivariate analysis. A conceptual model was developed, elucidating changes in soil physicochemistry along the restoration trajectory. The success of the PCRP restoration and the establishment of flora and fauna are strongly influenced by soil variability, but the developing plant communities also substantially modify soil physicochemistry. The study illustrates that a preliminary investigation of site-specific soils should be an essential part of restoration practice.
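
    As a rough illustration of the multivariate synthesis mentioned above, the Python sketch below runs a principal component analysis over a table of soil-chemistry variables. The column names, the random data and the use of scikit-learn are illustrative assumptions, not the study's actual analysis.

        # Illustrative PCA over hypothetical soil-chemistry variables.
        # Everything here (column names, random data) is invented; it only
        # shows the shape of a multivariate ordination, not the study's own.
        import numpy as np
        import pandas as pd
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        soil = pd.DataFrame(
            rng.normal(size=(30, 5)),
            columns=["pH", "organic_matter", "total_P", "organic_P", "occluded_P"],
        )

        # Standardize so variables measured in different units weigh equally.
        X = StandardScaler().fit_transform(soil)

        pca = PCA(n_components=2)
        scores = pca.fit_transform(X)  # ordination coordinates per sample
        print("explained variance:", pca.explained_variance_ratio_)

        # Loadings show which soil variables drive each ordination axis.
        loadings = pd.DataFrame(pca.components_.T, index=soil.columns,
                                columns=["PC1", "PC2"])
        print(loadings)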

    Architectural and Compiler Mechanisms for Accelerating Single-Thread Applications on Multicore Processors

    Multicore systems have become the dominant mainstream computing platform. One of the biggest challenges going forward is how to efficiently utilize the ever-increasing computational power these systems provide. Applications with large amounts of explicit thread-level parallelism naturally scale performance with the number of cores, but single-thread applications realize little to no gain from multicore systems. This work investigates architectural and compiler mechanisms to automatically accelerate single-thread applications on multicore processors by efficiently exploiting three types of parallelism across multiple cores: instruction-level parallelism (ILP), fine-grain thread-level parallelism (TLP), and speculative loop-level parallelism (LLP). A multicore architecture called Voltron is proposed to exploit these different types of parallelism. Voltron can organize the cores for execution in either coupled or decoupled mode. In coupled mode, several in-order cores are coalesced to emulate a wide-issue VLIW processor. In decoupled mode, the cores execute a set of fine-grain communicating threads extracted by the compiler. By executing fine-grain threads in parallel, Voltron provides coarse-grained out-of-order execution capability using in-order cores. Architectural mechanisms for speculative execution of loop iterations are also supported in decoupled mode. Voltron can dynamically switch between the two modes with low overhead to exploit the best form of available parallelism. This dissertation also investigates compiler techniques to exploit the different types of parallelism on the proposed architecture. First, it proposes compiler techniques to manage multiple instruction streams so that they collectively function as a single logical stream on a conventional VLIW, exploiting ILP. Second, it studies compiler algorithms to extract fine-grain threads. Third, it proposes a series of systematic compiler transformations and a general code-generation framework to expose hidden speculative LLP hindered by register and memory dependences in the code. These transformations collectively remove inter-iteration dependences that are caused by subsets of isolatable instructions, are unwindable, or occur infrequently. Experimental results show that the proposed mechanisms achieve speedups of 1.33 and 1.14 on 4-core machines by exploiting ILP and TLP, respectively. The proposed transformations increase DOALL loop coverage in applications from 27% to 61%, resulting in a speedup of 1.84 on 4-core systems.
    Ph.D. thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/58419/1/hongtaoz_1.pd
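
    To give a flavor of the DOALL transformations described above, the sketch below shows, in Python rather than compiler IR, how an inter-iteration dependence on an accumulator can be removed: each worker keeps a private partial sum, and the partial sums are combined after the (now independent) iterations finish. The function names and the use of a process pool are illustrative assumptions.

        # A reduction (acc += f(i)) carries a dependence across iterations.
        # Expanding it into per-worker partial sums removes the dependence
        # and turns the loop into a DOALL loop. Illustrative sketch only.
        from concurrent.futures import ProcessPoolExecutor

        def f(i):
            return i * i  # stand-in for the loop body's computation

        def sequential(n):
            acc = 0
            for i in range(n):
                acc += f(i)  # inter-iteration dependence on acc
            return acc

        def partial_sum(chunk):
            s = 0  # private accumulator: no cross-iteration dependence
            for i in chunk:
                s += f(i)
            return s

        def doall(n, workers=4):
            # Partition the iteration space; run the chunks independently.
            chunks = [range(w, n, workers) for w in range(workers)]
            with ProcessPoolExecutor(max_workers=workers) as ex:
                return sum(ex.map(partial_sum, chunks))

        if __name__ == "__main__":
            assert sequential(1000) == doall(1000)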

    Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models

    Deep generative models (DGMs) are data-hungry because learning a complex model on limited data suffers from large variance and easily overfits. Inspired by the classical perspective of the bias-variance tradeoff, we propose the regularized deep generative model (Reg-DGM), which leverages a nontransferable pre-trained model to reduce the variance of generative modeling with limited data. Formally, Reg-DGM optimizes a weighted sum of a certain divergence and the expectation of an energy function, where the divergence is between the data and model distributions, and the energy function is defined by the pre-trained model w.r.t. the model distribution. We analyze a simple yet representative Gaussian-fitting case to demonstrate how the weighting hyperparameter trades off bias and variance. Theoretically, we characterize the existence and uniqueness of the global minimum of Reg-DGM in a non-parametric setting and prove its convergence with neural networks trained by gradient-based methods. Empirically, with various pre-trained feature extractors and a data-dependent energy function, Reg-DGM consistently improves the generation performance of strong DGMs with limited data and achieves results competitive with state-of-the-art methods.
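
    The Gaussian-fitting case mentioned above can be simulated in a few lines. In the sketch below, the divergence term reduces to (mu - xbar)^2 / 2 and the energy term, defined here (as an assumption) by a "pre-trained" reference mean mu0 with E(x) = (x - mu0)^2 / 2, has expectation ((mu - mu0)^2 + 1) / 2 under N(mu, 1); the minimizer of the weighted sum is the shrinkage estimator mu_hat = (xbar + lam * mu0) / (1 + lam). Sweeping lam shows the bias-variance tradeoff. The paper's exact objective and derivation may differ.

        # Illustrative bias-variance simulation for the Gaussian-fitting
        # case: bias toward the pre-trained reference mu0 grows with lam,
        # while the variance of the estimator shrinks.
        import numpy as np

        rng = np.random.default_rng(0)
        mu_true, mu0, n, trials = 1.0, 0.5, 5, 10_000

        for lam in [0.0, 0.5, 2.0]:
            est = np.empty(trials)
            for t in range(trials):
                xbar = rng.normal(mu_true, 1.0, size=n).mean()
                est[t] = (xbar + lam * mu0) / (1 + lam)  # closed-form minimizer
            print(f"lam={lam}: bias={est.mean() - mu_true:+.3f}, "
                  f"var={est.var():.3f}, mse={((est - mu_true) ** 2).mean():.3f}")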

    Laser-induced incandescence particle image velocimetry (LII-PIV) for two-phase flow velocity measurement

    We demonstrate the use of laser-induced incandescence (LII) of submicron tungsten carbide (WC) particles as a method for particle image velocimetry (PIV). The technique allows a single laser to be used for separate velocity measurements of the two phases in a droplet-laden flow. Submicron WC particles are intentionally seeded into a two-phase flow and heated by a light sheet generated by a double-pulsed PIV laser operating at sufficiently high pulse energy. Their small size and large absorption cross-section allow the particles to be heated to several thousand kelvin and emit strong incandescence signals, whilst the laser-induced temperature rise in liquid droplets and large particles is negligible. The incandescence signal from the WC particles and the Mie scattering from droplets/large particles are captured separately by applying different filters to a PIV camera. Consecutive LII images are used to determine the velocity field of the gas phase, and the Mie-scattering images are used to extract the velocity of the droplets/large particles. The proposed technique is first demonstrated in an air jet and compared with the result of a conventional PIV measurement, which shows that submicron WC particles accurately follow the gas flow and that the LII images are suitable for cross-correlation. We then apply the technique to a non-reacting ethanol droplet/air jet, demonstrating the resulting slip velocity between the two phases. The proposed technique, combining PIV and LII with a single laser, requires little additional equipment and is applicable to much higher droplet/particle densities than previously feasible. Finally, the possibility of applying the technique to a flame is demonstrated and discussed.
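
    The cross-correlation at the core of the PIV processing can be sketched compactly. The Python fragment below recovers a known displacement between two synthetic frames by locating the peak of their FFT-based cross-correlation; an actual LII-PIV pipeline would apply this per interrogation window to the filtered incandescence and Mie-scattering images, and the synthetic data here are purely illustrative.

        # Estimate the displacement of a particle pattern between two
        # frames from the peak of their cross-correlation (synthetic data).
        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(1)
        frame1 = rng.random((64, 64))
        true_shift = (3, -2)  # rows, cols
        frame2 = np.roll(frame1, true_shift, axis=(0, 1))

        # Cross-correlate by convolving frame2 with the flipped frame1.
        a = frame1 - frame1.mean()
        b = frame2 - frame2.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode="full")

        # The peak location relative to zero lag gives the displacement.
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = (peak[0] - (frame1.shape[0] - 1),
                 peak[1] - (frame1.shape[1] - 1))
        print("recovered shift:", shift)  # expected: (3, -2)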

    Automatic Design of Application Specific Instruction Set Extensions Through Dataflow Graph Exploration

    General-purpose processors are often incapable of meeting the challenging cost, performance, and power demands of high-performance applications. Consequently, most systems employ a number of hardware accelerators to off-load the computationally demanding portions of the application. As an alternative to this strategy, we examine customizing the computation capabilities of a processor for a particular application. The processor is extended with hardware in the form of a set of custom function units and instruction set extensions. To effectively identify opportunities for creating custom hardware, a dataflow graph design space exploration engine heuristically identifies candidate computation subgraphs without artificially constraining their size or shape. The engine combines estimates of performance gain, cost, and inherent limitations of the processor to grow candidate graphs in profitable directions while pruning unprofitable paths. This paper describes the dataflow graph exploration engine and evaluates its effectiveness across a set of embedded applications.
    Peer reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/44572/1/10766_2004_Article_476941.pd
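
    A toy version of the guided subgraph growth may help clarify the mechanism. In the Python sketch below, a candidate subgraph grows greedily from a seed operation, adding whichever neighboring dataflow node most improves an estimated benefit and stopping when no neighbor helps. The graph, latencies, hardware costs, and the crude benefit model are invented for illustration; the real engine's gain, cost, and constraint estimates are far richer.

        # Greedy growth of a candidate subgraph in a tiny dataflow graph.
        # All numbers and the benefit model are hypothetical.
        succs = {"ld": ["add"], "add": ["mul"], "mul": ["shl"],
                 "shl": ["st"], "st": []}
        preds = {n: [p for p, ss in succs.items() if n in ss] for n in succs}
        latency = {"ld": 2, "add": 1, "mul": 3, "shl": 1, "st": 2}  # cycles
        hw_cost = {"ld": 5, "add": 1, "mul": 4, "shl": 1, "st": 5}

        def gain(subgraph):
            # Cycles saved by fusing the subgraph into one custom
            # instruction, minus a cost penalty. Memory ops are excluded,
            # mimicking an architectural limitation of custom units.
            if any(op in ("ld", "st") for op in subgraph):
                return float("-inf")
            saved = sum(latency[op] for op in subgraph) - 1  # fused op ~1 cycle
            return saved - 0.5 * sum(hw_cost[op] for op in subgraph)

        def grow(seed):
            chosen = {seed}
            while True:
                frontier = {n for op in chosen
                            for n in succs[op] + preds[op]} - chosen
                best = max(frontier, key=lambda n: gain(chosen | {n}),
                           default=None)
                if best is None or gain(chosen | {best}) <= gain(chosen):
                    return chosen  # prune: no profitable direction left
                chosen.add(best)

        print(grow("add"))  # grows to {'add', 'mul', 'shl'} in this toy graph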